#load the required library
library(MASS)
#load the data
data("Boston")
Data description:
This dataset is about housing values in suburbs of Boston. More info can be found at: https://stat.ethz.ch/R-manual/R-devel/library/MASS/html/Boston.html
#explore the dataset
str(Boston)
## 'data.frame': 506 obs. of 14 variables:
## $ crim : num 0.00632 0.02731 0.02729 0.03237 0.06905 ...
## $ zn : num 18 0 0 0 0 0 12.5 12.5 12.5 12.5 ...
## $ indus : num 2.31 7.07 7.07 2.18 2.18 2.18 7.87 7.87 7.87 7.87 ...
## $ chas : int 0 0 0 0 0 0 0 0 0 0 ...
## $ nox : num 0.538 0.469 0.469 0.458 0.458 0.458 0.524 0.524 0.524 0.524 ...
## $ rm : num 6.58 6.42 7.18 7 7.15 ...
## $ age : num 65.2 78.9 61.1 45.8 54.2 58.7 66.6 96.1 100 85.9 ...
## $ dis : num 4.09 4.97 4.97 6.06 6.06 ...
## $ rad : int 1 2 2 3 3 3 5 5 5 5 ...
## $ tax : num 296 242 242 222 222 222 311 311 311 311 ...
## $ ptratio: num 15.3 17.8 17.8 18.7 18.7 18.7 15.2 15.2 15.2 15.2 ...
## $ black : num 397 397 393 395 397 ...
## $ lstat : num 4.98 9.14 4.03 2.94 5.33 ...
## $ medv : num 24 21.6 34.7 33.4 36.2 28.7 22.9 27.1 16.5 18.9 ...
#summary of the data
summary(Boston)
## crim zn indus chas
## Min. : 0.00632 Min. : 0.00 Min. : 0.46 Min. :0.00000
## 1st Qu.: 0.08205 1st Qu.: 0.00 1st Qu.: 5.19 1st Qu.:0.00000
## Median : 0.25651 Median : 0.00 Median : 9.69 Median :0.00000
## Mean : 3.61352 Mean : 11.36 Mean :11.14 Mean :0.06917
## 3rd Qu.: 3.67708 3rd Qu.: 12.50 3rd Qu.:18.10 3rd Qu.:0.00000
## Max. :88.97620 Max. :100.00 Max. :27.74 Max. :1.00000
## nox rm age dis
## Min. :0.3850 Min. :3.561 Min. : 2.90 Min. : 1.130
## 1st Qu.:0.4490 1st Qu.:5.886 1st Qu.: 45.02 1st Qu.: 2.100
## Median :0.5380 Median :6.208 Median : 77.50 Median : 3.207
## Mean :0.5547 Mean :6.285 Mean : 68.57 Mean : 3.795
## 3rd Qu.:0.6240 3rd Qu.:6.623 3rd Qu.: 94.08 3rd Qu.: 5.188
## Max. :0.8710 Max. :8.780 Max. :100.00 Max. :12.127
## rad tax ptratio black
## Min. : 1.000 Min. :187.0 Min. :12.60 Min. : 0.32
## 1st Qu.: 4.000 1st Qu.:279.0 1st Qu.:17.40 1st Qu.:375.38
## Median : 5.000 Median :330.0 Median :19.05 Median :391.44
## Mean : 9.549 Mean :408.2 Mean :18.46 Mean :356.67
## 3rd Qu.:24.000 3rd Qu.:666.0 3rd Qu.:20.20 3rd Qu.:396.23
## Max. :24.000 Max. :711.0 Max. :22.00 Max. :396.90
## lstat medv
## Min. : 1.73 Min. : 5.00
## 1st Qu.: 6.95 1st Qu.:17.02
## Median :11.36 Median :21.20
## Mean :12.65 Mean :22.53
## 3rd Qu.:16.95 3rd Qu.:25.00
## Max. :37.97 Max. :50.00
The summary shows the minimum, maximum, mean, median, and first and third quartiles for each column of the dataset.
#check dimension of the data
dim(Boston)
## [1] 506 14
As shown, this dataset has 506 rows (observations) and 14 columns (variables).
# See head (first 6 rows) of the data
head(Boston)
## crim zn indus chas nox rm age dis rad tax ptratio black lstat
## 1 0.00632 18 2.31 0 0.538 6.575 65.2 4.0900 1 296 15.3 396.90 4.98
## 2 0.02731 0 7.07 0 0.469 6.421 78.9 4.9671 2 242 17.8 396.90 9.14
## 3 0.02729 0 7.07 0 0.469 7.185 61.1 4.9671 2 242 17.8 392.83 4.03
## 4 0.03237 0 2.18 0 0.458 6.998 45.8 6.0622 3 222 18.7 394.63 2.94
## 5 0.06905 0 2.18 0 0.458 7.147 54.2 6.0622 3 222 18.7 396.90 5.33
## 6 0.02985 0 2.18 0 0.458 6.430 58.7 6.0622 3 222 18.7 394.12 5.21
## medv
## 1 24.0
## 2 21.6
## 3 34.7
## 4 33.4
## 5 36.2
## 6 28.7
# See tail (last 6 rows) of the data
tail(Boston)
## crim zn indus chas nox rm age dis rad tax ptratio black lstat
## 501 0.22438 0 9.69 0 0.585 6.027 79.7 2.4982 6 391 19.2 396.90 14.33
## 502 0.06263 0 11.93 0 0.573 6.593 69.1 2.4786 1 273 21.0 391.99 9.67
## 503 0.04527 0 11.93 0 0.573 6.120 76.7 2.2875 1 273 21.0 396.90 9.08
## 504 0.06076 0 11.93 0 0.573 6.976 91.0 2.1675 1 273 21.0 396.90 5.64
## 505 0.10959 0 11.93 0 0.573 6.794 89.3 2.3889 1 273 21.0 393.45 6.48
## 506 0.04741 0 11.93 0 0.573 6.030 80.8 2.5050 1 273 21.0 396.90 7.88
## medv
## 501 16.8
## 502 22.4
## 503 20.6
## 504 23.9
## 505 22.0
## 506 11.9
#see column names
colnames(Boston)
## [1] "crim" "zn" "indus" "chas" "nox" "rm" "age"
## [8] "dis" "rad" "tax" "ptratio" "black" "lstat" "medv"
# plot matrix of the variables
pairs(Boston)
You can see pairwise plots of every pair of variables (columns) and how they are distributed.
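Since 14 variables make the full plot matrix hard to read, a smaller sketch restricted to a hand-picked subset of columns (chosen here purely for illustration) can be easier to inspect:
# Optional: a pairs plot of a few selected columns is easier to read
pairs(Boston[, c("rm", "lstat", "medv", "nox", "dis")])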
library(ggplot2)
library(GGally)
## Registered S3 method overwritten by 'GGally':
## method from
## +.gg ggplot2
p <- ggpairs(Boston, mapping = aes(), lower = list(combo = wrap("facethist", bins = 20)))
p
These plots show the correlation of each column (variable) with the others, together with the distribution of each column.
Let’s also draw a boxplot to visually inspect the distribution, minimum, maximum, quartiles and median of each variable.
boxplot(Boston,
main = "Boxplot of all columns of Boston data",
xlab = "variable name",
ylab = "",
col = "orange",
border = "brown",
horizontal = F,
notch = F)
As the graph shows, the column “tax” has the highest variation, followed by black. The values in the other columns vary much less!
Let’s look at the structure of the data even more closely using glimpse.
# read libraries
library(tidyr); library(dplyr); library(ggplot2)
##
## Attaching package: 'dplyr'
## The following object is masked from 'package:MASS':
##
## select
## The following objects are masked from 'package:stats':
##
## filter, lag
## The following objects are masked from 'package:base':
##
## intersect, setdiff, setequal, union
# glimpse at the Boston data
glimpse(Boston)
## Rows: 506
## Columns: 14
## $ crim <dbl> 0.00632, 0.02731, 0.02729, 0.03237, 0.06905, 0.02985, 0.088...
## $ zn <dbl> 18.0, 0.0, 0.0, 0.0, 0.0, 0.0, 12.5, 12.5, 12.5, 12.5, 12.5...
## $ indus <dbl> 2.31, 7.07, 7.07, 2.18, 2.18, 2.18, 7.87, 7.87, 7.87, 7.87,...
## $ chas <int> 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,...
## $ nox <dbl> 0.538, 0.469, 0.469, 0.458, 0.458, 0.458, 0.524, 0.524, 0.5...
## $ rm <dbl> 6.575, 6.421, 7.185, 6.998, 7.147, 6.430, 6.012, 6.172, 5.6...
## $ age <dbl> 65.2, 78.9, 61.1, 45.8, 54.2, 58.7, 66.6, 96.1, 100.0, 85.9...
## $ dis <dbl> 4.0900, 4.9671, 4.9671, 6.0622, 6.0622, 6.0622, 5.5605, 5.9...
## $ rad <int> 1, 2, 2, 3, 3, 3, 5, 5, 5, 5, 5, 5, 5, 4, 4, 4, 4, 4, 4, 4,...
## $ tax <dbl> 296, 242, 242, 222, 222, 222, 311, 311, 311, 311, 311, 311,...
## $ ptratio <dbl> 15.3, 17.8, 17.8, 18.7, 18.7, 18.7, 15.2, 15.2, 15.2, 15.2,...
## $ black <dbl> 396.90, 396.90, 392.83, 394.63, 396.90, 394.12, 395.60, 396...
## $ lstat <dbl> 4.98, 9.14, 4.03, 2.94, 5.33, 5.21, 12.43, 19.15, 29.93, 17...
## $ medv <dbl> 24.0, 21.6, 34.7, 33.4, 36.2, 28.7, 22.9, 27.1, 16.5, 18.9,...
# use gather() to gather columns into key-value pairs and then glimpse() at the resulting data
gather(Boston) %>% glimpse
## Rows: 7,084
## Columns: 2
## $ key <chr> "crim", "crim", "crim", "crim", "crim", "crim", "crim", "crim...
## $ value <dbl> 0.00632, 0.02731, 0.02729, 0.03237, 0.06905, 0.02985, 0.08829...
# draw a bar plot of each variable
gather(Boston) %>% ggplot(aes(value)) + facet_wrap("key", scales = "free") + geom_bar()
The graph shows a bar plot of the values of each column, i.e. roughly the distribution of each variable.
Let’s investigate possible correlations within the data: calculate the correlation matrix and round it.
cor_matrix<-cor(Boston) %>% round(digits = 2)
# print the correlation matrix
"cor_matrix" # I comment this line out to shorten the report
## [1] "cor_matrix"
Inspired by http://www.htmlwidgets.org/showcase_datatables.html
library('DT')
datatable(cor_matrix, options = list(pageLength = 10))
Alternatively, set the options so that the page length is limited by the number of rows of the input data frame: datatable(cor_matrix, options = list(pageLength = nrow(cor_matrix)))
Visualize the correlation matrix:
library("corrplot")
## corrplot 0.84 loaded
corrplot(cor_matrix, method="circle", type="upper",
cl.pos="b", tl.pos="d", tl.cex = 0.7)
The graph shows the correlation between variables: the larger the circle (and the darker the colour), the stronger the correlation, and the colour indicates whether the correlation is positive or negative. For example, the correlation between nox and dis is strong and negative, i.e. as one variable increases, the other decreases. Another example is rad and tax, which have a strong positive correlation.
On the other hand, chas correlates only weakly with the other variables.
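As a quick check, a small sketch using the cor_matrix object computed above can print the individual coefficients behind these statements:
# Look up individual coefficients in the rounded correlation matrix
cor_matrix["nox", "dis"]   # expected to be strongly negative
cor_matrix["rad", "tax"]   # expected to be strongly positive
cor_matrix["chas", ]       # expected to be weak throughout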
Task 4: Let’s standardize the variables using the scale function.
boston_scaled <- scale(Boston)
As center = TRUE (the default), centering is done by subtracting the column means (omitting NAs) of x from their corresponding columns; since scale = TRUE is also the default, the centered columns are then divided by their standard deviations.
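A minimal sketch of what scale() does under the hood, assuming the defaults center = TRUE and scale = TRUE (manual_scaled is just an illustrative helper name):
# Reproduce scale() manually: subtract column means, then divide by column SDs
manual_scaled <- sweep(Boston, 2, colMeans(Boston), "-")
manual_scaled <- sweep(manual_scaled, 2, apply(Boston, 2, sd), "/")
all.equal(as.matrix(manual_scaled), scale(Boston), check.attributes = FALSE)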
# summaries of the scaled variables
summary(boston_scaled)
## crim zn indus chas
## Min. :-0.419367 Min. :-0.48724 Min. :-1.5563 Min. :-0.2723
## 1st Qu.:-0.410563 1st Qu.:-0.48724 1st Qu.:-0.8668 1st Qu.:-0.2723
## Median :-0.390280 Median :-0.48724 Median :-0.2109 Median :-0.2723
## Mean : 0.000000 Mean : 0.00000 Mean : 0.0000 Mean : 0.0000
## 3rd Qu.: 0.007389 3rd Qu.: 0.04872 3rd Qu.: 1.0150 3rd Qu.:-0.2723
## Max. : 9.924110 Max. : 3.80047 Max. : 2.4202 Max. : 3.6648
## nox rm age dis
## Min. :-1.4644 Min. :-3.8764 Min. :-2.3331 Min. :-1.2658
## 1st Qu.:-0.9121 1st Qu.:-0.5681 1st Qu.:-0.8366 1st Qu.:-0.8049
## Median :-0.1441 Median :-0.1084 Median : 0.3171 Median :-0.2790
## Mean : 0.0000 Mean : 0.0000 Mean : 0.0000 Mean : 0.0000
## 3rd Qu.: 0.5981 3rd Qu.: 0.4823 3rd Qu.: 0.9059 3rd Qu.: 0.6617
## Max. : 2.7296 Max. : 3.5515 Max. : 1.1164 Max. : 3.9566
## rad tax ptratio black
## Min. :-0.9819 Min. :-1.3127 Min. :-2.7047 Min. :-3.9033
## 1st Qu.:-0.6373 1st Qu.:-0.7668 1st Qu.:-0.4876 1st Qu.: 0.2049
## Median :-0.5225 Median :-0.4642 Median : 0.2746 Median : 0.3808
## Mean : 0.0000 Mean : 0.0000 Mean : 0.0000 Mean : 0.0000
## 3rd Qu.: 1.6596 3rd Qu.: 1.5294 3rd Qu.: 0.8058 3rd Qu.: 0.4332
## Max. : 1.6596 Max. : 1.7964 Max. : 1.6372 Max. : 0.4406
## lstat medv
## Min. :-1.5296 Min. :-1.9063
## 1st Qu.:-0.7986 1st Qu.:-0.5989
## Median :-0.1811 Median :-0.1449
## Mean : 0.0000 Mean : 0.0000
## 3rd Qu.: 0.6024 3rd Qu.: 0.2683
## Max. : 3.5453 Max. : 2.9865
So, the variables were standardized as explained above: the mean of each column was shifted to 0 and the other values were scaled accordingly, so the mean of every column is now 0.
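A quick verification sketch: after standardization the column means should be essentially 0 and the standard deviations 1.
# Column means and standard deviations of the scaled data
round(colMeans(boston_scaled), 10)
apply(boston_scaled, 2, sd)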
# class of the boston_scaled object
class(boston_scaled)
## [1] "matrix" "array"
As you can see, the class of the object is “matrix” “array”, so we will change it to a data frame.
# change the object to data frame
boston_scaled <- as.data.frame(boston_scaled)
summary(boston_scaled)
## crim zn indus chas
## Min. :-0.419367 Min. :-0.48724 Min. :-1.5563 Min. :-0.2723
## 1st Qu.:-0.410563 1st Qu.:-0.48724 1st Qu.:-0.8668 1st Qu.:-0.2723
## Median :-0.390280 Median :-0.48724 Median :-0.2109 Median :-0.2723
## Mean : 0.000000 Mean : 0.00000 Mean : 0.0000 Mean : 0.0000
## 3rd Qu.: 0.007389 3rd Qu.: 0.04872 3rd Qu.: 1.0150 3rd Qu.:-0.2723
## Max. : 9.924110 Max. : 3.80047 Max. : 2.4202 Max. : 3.6648
## nox rm age dis
## Min. :-1.4644 Min. :-3.8764 Min. :-2.3331 Min. :-1.2658
## 1st Qu.:-0.9121 1st Qu.:-0.5681 1st Qu.:-0.8366 1st Qu.:-0.8049
## Median :-0.1441 Median :-0.1084 Median : 0.3171 Median :-0.2790
## Mean : 0.0000 Mean : 0.0000 Mean : 0.0000 Mean : 0.0000
## 3rd Qu.: 0.5981 3rd Qu.: 0.4823 3rd Qu.: 0.9059 3rd Qu.: 0.6617
## Max. : 2.7296 Max. : 3.5515 Max. : 1.1164 Max. : 3.9566
## rad tax ptratio black
## Min. :-0.9819 Min. :-1.3127 Min. :-2.7047 Min. :-3.9033
## 1st Qu.:-0.6373 1st Qu.:-0.7668 1st Qu.:-0.4876 1st Qu.: 0.2049
## Median :-0.5225 Median :-0.4642 Median : 0.2746 Median : 0.3808
## Mean : 0.0000 Mean : 0.0000 Mean : 0.0000 Mean : 0.0000
## 3rd Qu.: 1.6596 3rd Qu.: 1.5294 3rd Qu.: 0.8058 3rd Qu.: 0.4332
## Max. : 1.6596 Max. : 1.7964 Max. : 1.6372 Max. : 0.4406
## lstat medv
## Min. :-1.5296 Min. :-1.9063
## 1st Qu.:-0.7986 1st Qu.:-0.5989
## Median :-0.1811 Median :-0.1449
## Mean : 0.0000 Mean : 0.0000
## 3rd Qu.: 0.6024 3rd Qu.: 0.2683
## Max. : 3.5453 Max. : 2.9865
str(boston_scaled)
## 'data.frame': 506 obs. of 14 variables:
## $ crim : num -0.419 -0.417 -0.417 -0.416 -0.412 ...
## $ zn : num 0.285 -0.487 -0.487 -0.487 -0.487 ...
## $ indus : num -1.287 -0.593 -0.593 -1.306 -1.306 ...
## $ chas : num -0.272 -0.272 -0.272 -0.272 -0.272 ...
## $ nox : num -0.144 -0.74 -0.74 -0.834 -0.834 ...
## $ rm : num 0.413 0.194 1.281 1.015 1.227 ...
## $ age : num -0.12 0.367 -0.266 -0.809 -0.511 ...
## $ dis : num 0.14 0.557 0.557 1.077 1.077 ...
## $ rad : num -0.982 -0.867 -0.867 -0.752 -0.752 ...
## $ tax : num -0.666 -0.986 -0.986 -1.105 -1.105 ...
## $ ptratio: num -1.458 -0.303 -0.303 0.113 0.113 ...
## $ black : num 0.441 0.441 0.396 0.416 0.441 ...
## $ lstat : num -1.074 -0.492 -1.208 -1.36 -1.025 ...
## $ medv : num 0.16 -0.101 1.323 1.182 1.486 ...
#head(boston_scaled)
boxplot(boston_scaled,
main = "Boxplot of all columns of scaled Boston data",
xlab = "variable name",
ylab = "",
col = "orange",
border = "brown",
horizontal = F,
notch = F)
So, now the distributions of the variables are much easier to compare. Yaaay.
# summary of the scaled crime rate
summary(boston_scaled$crim)
## Min. 1st Qu. Median Mean 3rd Qu. Max.
## -0.419367 -0.410563 -0.390280 0.000000 0.007389 9.924110
As instructed, we shall now use the quantile vector of the column crim to create a categorical variable for the crime rate in the dataset.
So, let’s start.
# create a quantile vector of crim and print it
bins <- quantile(boston_scaled$crim)
bins
## 0% 25% 50% 75% 100%
## -0.419366929 -0.410563278 -0.390280295 0.007389247 9.924109610
You can see the quantiles of the column crim; for example, the minimum is -0.419366929 and the first quartile is -0.410563278.
Now, we can categorise the continuous (float) values based on the quantiles of the data (as instructed).
# create a categorical variable 'crime'
crime <- cut(boston_scaled$crim, breaks = bins, include.lowest = TRUE, labels = c("low", "med_low", "med_high", "high"))
cut() divides the range of x into intervals and codes the values in x according to the interval into which they fall.
Now, we have labelled the crime rate as “low”, “med_low”, “med_high” or “high” based on the quantile values.
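A tiny illustrative sketch of how cut() assigns the labels, using a few made-up values that fall into the four intervals defined by bins:
# Made-up values, one from each quantile interval of the scaled crim column
cut(c(-0.415, -0.40, 0, 5), breaks = bins, include.lowest = TRUE,
    labels = c("low", "med_low", "med_high", "high"))
# expected labels: low, med_low, med_high, high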
# look at the table of the new factor crime
table(crime)
## crime
## low med_low med_high high
## 127 126 126 127
Now you can see how many observations were categorised into each class: low has 127 cases, med_low 126, med_high 126 and high 127.
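Since the breaks are the quartiles, each class should contain roughly a quarter of the observations; a quick sketch to confirm:
# Share of observations in each crime class (in %)
round(prop.table(table(crime)) * 100, 1)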
#we can even plot them
plot(crime)
# remove original crim from the dataset
boston_scaled <- dplyr::select(boston_scaled, -crim)
# add the new categorical value to scaled data
boston_scaled <- data.frame(boston_scaled, crime)
# number of rows in the Boston dataset
n <- nrow(boston_scaled)
# choose randomly 80% of the rows as train dataset
ind <- sample(n, size = n * 0.8)
# create train set
train <- boston_scaled[ind,]
# create test set by selecting those observations (rows) that were not in the training data (using -ind)
test <- boston_scaled[-ind,]
# save the correct classes from test data
correct_classes <- test$crime
# remove the crime variable from test data
test <- dplyr::select(test, -crime)
Task 5: Fit a linear discriminant analysis model on the train set.
lda.fit <- lda(crime ~ ., data = train)
This uses all the other variables (columns) of the train dataset to predict crime (the categorised column).
# print the lda.fit object
lda.fit
## Call:
## lda(crime ~ ., data = train)
##
## Prior probabilities of groups:
## low med_low med_high high
## 0.2475248 0.2524752 0.2450495 0.2549505
##
## Group means:
## zn indus chas nox rm age
## low 0.92406076 -0.9364939 -0.114845058 -0.8812488 0.4549642 -0.9389687
## med_low -0.08957576 -0.2477884 -0.002135914 -0.5522978 -0.1388605 -0.3613283
## med_high -0.38242941 0.1766245 0.244663893 0.3750201 0.1438373 0.4593489
## high -0.48724019 1.0170891 -0.042983423 1.0507740 -0.4638738 0.8235637
## dis rad tax ptratio black lstat
## low 0.9247746 -0.6913090 -0.7313947 -0.49263761 0.3699591 -0.784360033
## med_low 0.3591902 -0.5416255 -0.4324520 -0.06550193 0.3495642 -0.115381389
## med_high -0.4054004 -0.4250387 -0.3339179 -0.33545448 0.0806025 -0.004559293
## high -0.8586924 1.6384176 1.5142626 0.78111358 -0.6665456 0.912165503
## medv
## low 0.54834583
## med_low -0.01763795
## med_high 0.19917575
## high -0.68117585
##
## Coefficients of linear discriminants:
## LD1 LD2 LD3
## zn 0.10828406 0.55046448 -0.94850662
## indus 0.08839723 -0.24291805 0.40934946
## chas -0.09898427 -0.06371194 0.08869718
## nox 0.32008480 -0.71011158 -1.26281297
## rm -0.10714935 -0.13975454 -0.12765360
## age 0.23298912 -0.43392967 -0.14157675
## dis -0.03210865 -0.09496148 0.22709369
## rad 3.38755869 0.80061323 -0.29745792
## tax 0.01406837 0.20026886 0.71635447
## ptratio 0.09808551 0.03132521 -0.23979259
## black -0.09349479 0.06100010 0.21757104
## lstat 0.19869235 -0.10670039 0.37003562
## medv 0.17026923 -0.24832142 -0.18639532
##
## Proportion of trace:
## LD1 LD2 LD3
## 0.9519 0.0358 0.0123
# the function for lda biplot arrows
lda.arrows <- function(x, myscale = 1, arrow_heads = 0.1, color = "orange", tex = 0.75, choices = c(1,2)){
heads <- coef(x)
arrows(x0 = 0, y0 = 0,
x1 = myscale * heads[,choices[1]],
y1 = myscale * heads[,choices[2]], col=color, length = arrow_heads)
text(myscale * heads[,choices], labels = row.names(heads),
cex = tex, col=color, pos=3)
}
# target classes as numeric
classes <- as.numeric(train$crime)
# plot the lda results
plot(lda.fit, dimen = 2, col = classes, pch = classes)
lda.arrows(lda.fit, myscale = 1)
Predict:
# predict classes with test data
lda.pred <- predict(lda.fit, newdata = test)
# cross tabulate the results
table(correct = correct_classes, predicted = lda.pred$class)
## predicted
## correct low med_low med_high high
## low 19 6 2 0
## med_low 6 14 4 0
## med_high 0 7 18 2
## high 0 0 0 24
Inspired by https://stackoverflow.com/a/64539733
library('caret')
## Loading required package: lattice
cm <- confusionMatrix(factor(lda.pred$class), factor(correct_classes), dnn = c("Prediction", "Reference"))
ggplot(as.data.frame(cm$table), aes(Prediction,sort(Reference,decreasing = T), fill= Freq)) +
geom_tile() + geom_text(aes(label=Freq)) +
scale_fill_gradient(low="white", high="#009193") +
labs(x = "Reference",y = "Prediction") +
scale_x_discrete(labels=c('low','med_low','med_high','high')) +
scale_y_discrete(labels=c('high', 'med_high', 'med_low', 'low'))
Overall accuracy is the proportion of correctly classified labels out of all predictions.
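As a sketch, the same number can be computed directly from the cross tabulation above, since the diagonal holds the correctly classified cases:
# Overall accuracy = correctly classified / all test observations
tab <- table(correct = correct_classes, predicted = lda.pred$class)
sum(diag(tab)) / sum(tab)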
Code inspired by https://stackoverflow.com/a/24349171
all_accuracies= cm$overall
all_accuracies
## Accuracy Kappa AccuracyLower AccuracyUpper AccuracyNull
## 7.352941e-01 6.473300e-01 6.387064e-01 8.177567e-01 2.647059e-01
## AccuracyPValue McnemarPValue
## 5.152609e-23 NaN
all_accuracies['Accuracy']*100
## Accuracy
## 73.52941
all_accuracies['Kappa']*100
## Kappa
## 64.733
The confusion matrix (heatmap) shows, for example, how many cases of class high were predicted correctly as high and how many were misclassified into neighbouring classes.
As the confusion matrix shows, the vast majority of the high cases were classified as high.
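A short sketch to quantify this per class: the caret confusion matrix object stores per-class statistics, including sensitivity (recall).
# Per-class sensitivity: share of each true class that was predicted correctly
cm$byClass[, "Sensitivity"]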
Task 7: Reload the dataset and calculate the distances between the observations.
# load MASS and Boston
library(MASS)
data('Boston')
# euclidean distance matrix
dist_eu <- dist(Boston)
# look at the summary of the distances
summary(dist_eu)
## Min. 1st Qu. Median Mean 3rd Qu. Max.
## 1.119 85.624 170.539 226.315 371.950 626.047
# manhattan distance matrix
dist_man <- dist(Boston, method = 'manhattan')
# look at the summary of the distances
summary(dist_man)
## Min. 1st Qu. Median Mean 3rd Qu. Max.
## 2.016 149.145 279.505 342.899 509.707 1198.265
As you can see, the mean Manhattan distance between observations is 342.899 (with 2.02 and 1198.26 as the minimum and maximum, respectively).
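A small sanity-check sketch, computing both distances by hand for the first two observations; they should match the corresponding entries of dist_eu and dist_man:
# Distances between observations 1 and 2, computed manually
x1 <- unlist(Boston[1, ]); x2 <- unlist(Boston[2, ])
sqrt(sum((x1 - x2)^2))   # Euclidean distance
sum(abs(x1 - x2))        # Manhattan distance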
Let’s cluster the dataset into 3 clusters, such that the homogeneity within each cluster is greater than between clusters. Clusters are identified by assessing the relative distances between points, and, in this example, the relative homogeneity of each cluster and the degree of separation between the clusters make the task very simple (Source: Multivariate Analysis for the Behavioral Sciences).
km <-kmeans(Boston, centers = 3)
# plot the Boston dataset with clusters
pairs(Boston, col = km$cluster)
These pairwise plots show, if we cluster the dataset into 3 clusters, how well the clusters can be distinguished from each other for each pair of variables.
For example, the column pairs tax and rad, tax and black, or tax and lstat were visually cluster-able. Interestingly, these columns also show fairly high correlations with each other.
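To see this more clearly, a sketch restricted to those columns, coloured by the 3-cluster solution stored in km, can be drawn:
# Pairs plot of the columns mentioned above, coloured by cluster
pairs(Boston[, c("tax", "rad", "black", "lstat")], col = km$cluster)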
Another example: the correlation between tax and rad is 0.91 (see below). Therefore, I plot the correlation matrix again below. FYI: we can also plot the clusters in 3D to see them better visually. More info: check the bonus exercises.
cor(Boston$tax, Boston$rad)
## [1] 0.9102282
# visualize the correlation matrix
corrplot(cor_matrix, method="circle", type="upper",
cl.pos="b", tl.pos="d", tl.cex = 0.7)
# Investigate the optimal number of clusters
set.seed(1)
# determine the number of clusters
k_max <- 10
# calculate the total within sum of squares
twcss <- sapply(1:k_max, function(k){kmeans(Boston, k)$tot.withinss})
# visualize the results
qplot(x = 1:k_max, y = twcss, geom = 'line', xlab= "number of clusters", ylab="Within groups sum of squares")
As the figure shows, the total within sum of squares declines sharply as the number of clusters K increases, but after 5 it gets almost steady, so something like 4 or 5 clusters is enough.
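A rough numerical sketch of the same elbow reasoning: look at the relative drop in the total within-cluster sum of squares when adding one more cluster; small drops suggest the extra cluster adds little.
# Percentage drop in total WCSS for each added cluster (k = 2, 3, ..., 10)
round(-diff(twcss) / head(twcss, -1) * 100, 1)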
# k-means clustering
km <-kmeans(Boston, centers = 2) #number of clusters
km
## K-means clustering with 2 clusters of sizes 369, 137
##
## Cluster means:
## crim zn indus chas nox rm age dis
## 1 0.3887744 15.58266 8.420894 0.07317073 0.5118474 6.388005 60.63225 4.441272
## 2 12.2991617 0.00000 18.451825 0.05839416 0.6701022 6.006212 89.96788 2.054470
## rad tax ptratio black lstat medv
## 1 4.455285 311.9268 17.80921 381.0426 10.41745 24.85718
## 2 23.270073 667.6423 20.19635 291.0391 18.67453 16.27226
##
## Clustering vector:
## 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20
## 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1
## 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40
## 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1
## 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60
## 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1
## 61 62 63 64 65 66 67 68 69 70 71 72 73 74 75 76 77 78 79 80
## 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1
## 81 82 83 84 85 86 87 88 89 90 91 92 93 94 95 96 97 98 99 100
## 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1
## 101 102 103 104 105 106 107 108 109 110 111 112 113 114 115 116 117 118 119 120
## 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1
## 121 122 123 124 125 126 127 128 129 130 131 132 133 134 135 136 137 138 139 140
## 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1
## 141 142 143 144 145 146 147 148 149 150 151 152 153 154 155 156 157 158 159 160
## 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1
## 161 162 163 164 165 166 167 168 169 170 171 172 173 174 175 176 177 178 179 180
## 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1
## 181 182 183 184 185 186 187 188 189 190 191 192 193 194 195 196 197 198 199 200
## 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1
## 201 202 203 204 205 206 207 208 209 210 211 212 213 214 215 216 217 218 219 220
## 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1
## 221 222 223 224 225 226 227 228 229 230 231 232 233 234 235 236 237 238 239 240
## 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1
## 241 242 243 244 245 246 247 248 249 250 251 252 253 254 255 256 257 258 259 260
## 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1
## 261 262 263 264 265 266 267 268 269 270 271 272 273 274 275 276 277 278 279 280
## 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1
## 281 282 283 284 285 286 287 288 289 290 291 292 293 294 295 296 297 298 299 300
## 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1
## 301 302 303 304 305 306 307 308 309 310 311 312 313 314 315 316 317 318 319 320
## 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1
## 321 322 323 324 325 326 327 328 329 330 331 332 333 334 335 336 337 338 339 340
## 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1
## 341 342 343 344 345 346 347 348 349 350 351 352 353 354 355 356 357 358 359 360
## 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 2 2 2 2
## 361 362 363 364 365 366 367 368 369 370 371 372 373 374 375 376 377 378 379 380
## 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2
## 381 382 383 384 385 386 387 388 389 390 391 392 393 394 395 396 397 398 399 400
## 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2
## 401 402 403 404 405 406 407 408 409 410 411 412 413 414 415 416 417 418 419 420
## 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2
## 421 422 423 424 425 426 427 428 429 430 431 432 433 434 435 436 437 438 439 440
## 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2
## 441 442 443 444 445 446 447 448 449 450 451 452 453 454 455 456 457 458 459 460
## 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2
## 461 462 463 464 465 466 467 468 469 470 471 472 473 474 475 476 477 478 479 480
## 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2
## 481 482 483 484 485 486 487 488 489 490 491 492 493 494 495 496 497 498 499 500
## 2 2 2 2 2 2 2 2 2 2 2 2 2 1 1 1 1 1 1 1
## 501 502 503 504 505 506
## 1 1 1 1 1 1
##
## Within cluster sum of squares by cluster:
## [1] 2868770 2896224
## (between_SS / total_SS = 70.3 %)
##
## Available components:
##
## [1] "cluster" "centers" "totss" "withinss" "tot.withinss"
## [6] "betweenss" "size" "iter" "ifault"
Here I analyse different numbers of clusters by changing the centers argument of kmeans() and report the resulting cluster-separation measure below.
Looking at the kmeans output, the "between_SS / total_SS" percentage is calculated by dividing betweenss by totss. So, let's write that into our loop so it gets printed for each number of clusters.
centers_list <- c(2, 3, 4, 5, 6, 7, 8, 9, 10, 15, 20)
for (i in centers_list){
  print('.........')
  print(i)
  km <- kmeans(Boston, centers = i)
  #print(km)
  print(km$betweenss / km$totss * 100)
}
## [1] "........."
## [1] 2
## [1] 70.28516
## [1] "........."
## [1] 3
## [1] 84.18386
## [1] "........."
## [1] 4
## [1] 90.64774
## [1] "........."
## [1] 5
## [1] 79.60841
## [1] "........."
## [1] 6
## [1] 93.57224
## [1] "........."
## [1] 7
## [1] 94.17725
## [1] "........."
## [1] 8
## [1] 82.01612
## [1] "........."
## [1] 9
## [1] 95.79905
## [1] "........."
## [1] 10
## [1] 82.72624
## [1] "........."
## [1] 15
## [1] 97.35675
## [1] "........."
## [1] 20
## [1] 97.86162
The numbers above are, for each run, the number of clusters followed by the between_SS / total_SS percentage.
Interpretation: Although this measure improves as the number of clusters increases, 4 clusters seem fine, because the value improved dramatically up to that point and then gets almost steady.
Notably, I think this depends on the application and the aim of each project. For example, in my personal experience with forest mapping using aerial imagery, increasing the number of clusters often led to horrible maps, so somewhere between 3 and 5 is good.
Thus, as explained above, we can use 4 clusters to keep things manageable and doable for this exercise. I run the clustering again to save it in an R object.
km <-kmeans(Boston, centers = 4)
# plot the Boston dataset with clusters
pairs(Boston, col = km$cluster)
As the figure shows, and as explained above, some column pairs, like tax and black or tax and dis, separate the clusters easily.
Inspired by https://www.r-bloggers.com/2020/05/practical-guide-to-k-means-clustering/. This can be considered an alternative way.
wssplot <- function(Boston, nc = 15, seed = 1234){
  wss <- (nrow(Boston) - 1) * sum(apply(Boston, 2, var))
  for (i in 2:nc){
    set.seed(seed)
    wss[i] <- sum(kmeans(Boston, centers = i)$withinss)
  }
  plot(1:nc, wss, type = "b", xlab = "Number of Clusters",
       ylab = "Within groups sum of squares")
}
# plotting values for each cluster starting from 1 to 9
wssplot(Boston, nc = 9)
Interpretation: As the plot shows, the total within-cluster sum of squares declines as the number of clusters increases. However, it gets steady after 5; thus, selecting 4 or 5 clusters sounds optimal.
library("factoextra")
## Welcome! Want to learn more? See two factoextra-related books at https://goo.gl/ve3WBa
my_data <- scale(Boston)
clus_plot = fviz_nbclust(my_data, kmeans, method = "wss") #method = "wss" is total within sum of square
clus_plot
The fviz_nbclust function determines and visualizes the optimal number of clusters using different methods; here we use the total within cluster sum of squares.
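For comparison, a sketch using another criterion supported by fviz_nbclust, the average silhouette width (its peak indicates the suggested number of clusters):
# Average silhouette width for different numbers of clusters
fviz_nbclust(my_data, kmeans, method = "silhouette")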
model_predictors <- dplyr::select(train, -crime)
# check the dimensions
dim(model_predictors)
## [1] 404 13
dim(lda.fit$scaling)
## [1] 13 3
# matrix multiplication
library("plotly")
##
## Attaching package: 'plotly'
## The following object is masked from 'package:ggplot2':
##
## last_plot
## The following object is masked from 'package:MASS':
##
## select
## The following object is masked from 'package:stats':
##
## filter
## The following object is masked from 'package:graphics':
##
## layout
library("colorspace")
#red <- hex(HLS(0, 0.5, 1))
matrix_product <- as.matrix(model_predictors) %*% lda.fit$scaling
matrix_product <- as.data.frame(matrix_product)
plot_ly(x = matrix_product$LD1, y = matrix_product$LD2,
z = matrix_product$LD3, type= 'scatter3d', mode='markers',
color = train$crime, colors = c("blue", "yellow", "black", "red"))
## Warning: `arrange_()` is deprecated as of dplyr 0.7.0.
## Please use `arrange()` instead.
## See vignette('programming') for more help
## This warning is displayed once every 8 hours.
## Call `lifecycle::last_warnings()` to see where this warning was generated.
Please feel free to zoom in and out on the 3D figure :) As you can see, we can distinguish the 4 classes by zooming into the 3D figure above.
require(plot3D)
## Loading required package: plot3D
attach(mtcars)
## The following object is masked from package:ggplot2:
##
## mpg
scatter3D(x = matrix_product$LD1, y = matrix_product$LD2,
z = matrix_product$LD3, pch = 18, cex = 2,
theta = 15, phi = 20, # by changing theta and phi, you change the viewing angle of the 3D plot
ticktype = "detailed",
xlab = "LD1", ylab = "LD2", zlab = "LD3", clab = "",
colkey = list(length = 0.8, width = 0.4),
main = "3D scatter of clusters", col=c("blue", "yellow", "black", "red"))
Technical note: by changing theta and phi, you change the viewing angle of the 3D plot.
From this view, we can clearly discriminate two clusters (one on the right side and one on the left side); however, if we could rotate the plot, we could look from the other side and say more about the 4 possible clusters visually. I made this possible in the extra code below.
Inspired by http://www.sthda.com/english/wiki/impressive-package-for-3d-and-4d-graph-r-software-and-data-visualization
# Make the rgl version
library("plot3Drgl")
## Loading required package: rgl
plotrgl()
Please feel free to zoom in and out on the 3D figure :)
Technical note: you should first run your 3D plot and then run this code to display it in an interactive, zoomable way.
We can see that it is fairly easy to identify 3-4 clusters visually.
FYI: you can even plot a 3D histogram of all columns and zoom into it: hist3D(z = as.matrix(test))
# plot the lda results
plot(lda.fit, dimen = 3, col = classes, pch = classes)
So, we can see that it is very easy to separate 2 clusters visually, and even 4 clusters is possible.
Inspired by https://www.r-bloggers.com/2020/05/practical-guide-to-k-means-clustering/
#Related to bonus 2: coloring
library(factoextra)
fviz_cluster(km, data = Boston, main = "Partitioning Clustering Plot")
Technical note: fviz_cluster provides ggplot2-based elegant visualization of partitioning methods including kmeans.
Since the graph is 2D, we cannot rotate it or zoom to see other views of the graph. But generally, it looks good!